47 research outputs found

    Novel Lower Bounds on the Entropy Rate of Binary Hidden Markov Processes

    Recently, Samorodnitsky proved a strengthened version of Mrs. Gerber's Lemma, in which the output entropy of a binary symmetric channel is bounded in terms of the average entropy of the input projected onto a random subset of coordinates. Here, this result is applied to derive novel lower bounds on the entropy rate of binary hidden Markov processes. For symmetric underlying Markov processes, our bound improves upon the best known bound in the very noisy regime. The nonsymmetric case is also considered, and explicit bounds are derived for Markov processes that satisfy the (1,∞)-RLL constraint.
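As a numerical companion, the entropy rate in question can be estimated by Monte Carlo simulation of the standard forward (prediction-filter) recursion for a symmetric binary Markov chain observed through a BSC. This is only a simulation baseline under assumed parameters, not the paper's bound:

```python
import math
import random

def h(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def hmp_entropy_rate(p, alpha, n=100000, seed=0):
    """Monte Carlo estimate of the entropy rate of a binary symmetric
    Markov chain (flip probability p) observed through a BSC(alpha),
    via H = lim E[h(P(Y_t = 1 | Y^{t-1}))] and the forward recursion."""
    rng = random.Random(seed)
    x = rng.randrange(2)   # chain starts in its (uniform) stationary law
    b = 0.5                # belief P(X_t = 1 | y^{t-1})
    total = 0.0
    for _ in range(n):
        q = b * (1 - alpha) + (1 - b) * alpha   # P(Y_t = 1 | y^{t-1})
        total += h(q)
        y = x ^ (rng.random() < alpha)          # noisy observation of X_t
        # Bayes update to P(X_t = 1 | y^t), then one-step prediction of X_{t+1}
        post = b * (1 - alpha) / q if y else b * alpha / (1 - q)
        b = post * (1 - p) + (1 - post) * p
        x = x ^ (rng.random() < p)              # chain transition
    return total / n

# sanity check: the rate sits between the pure-noise floor h(alpha) and 1 bit
est = hmp_entropy_rate(0.1, 0.1)
assert h(0.1) < est < 1.0
```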

    Integer-Forcing Source Coding

    Integer-Forcing (IF) is a new framework, based on compute-and-forward, for decoding multiple integer linear combinations from the output of a Gaussian multiple-input multiple-output channel. This work applies the IF approach to arrive at a new low-complexity scheme, IF source coding, for distributed lossy compression of correlated Gaussian sources under a minimum mean squared error distortion measure. All encoders use the same nested lattice codebook. Each encoder quantizes its observation using the fine lattice as a quantizer and reduces the result modulo the coarse lattice, which plays the role of binning. Rather than directly recovering the individual quantized signals, the decoder first recovers a full-rank set of judiciously chosen integer linear combinations of the quantized signals, and then inverts it. In general, the linear combinations have smaller average powers than the original signals. This makes it possible to increase the density of the coarse lattice, which in turn translates into smaller compression rates. We also propose and analyze a one-shot version of IF source coding that is simple enough to potentially lead to a new design principle for analog-to-digital converters that can exploit spatial correlations between the sampled signals.
    Comment: Submitted to the IEEE Transactions on Information Theory
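The key binning mechanism can be illustrated in one dimension with integer (fine-lattice) points and coarse lattice M·Z: a low-power integer combination of the sources survives the modulo reduction and is recovered exactly from the binned values. This is a toy sketch of the mechanism only, not the full IF source coding scheme; all parameters are illustrative:

```python
import numpy as np

def mod_centered(v, M):
    """Reduce modulo the coarse lattice M*Z to the centered cell [-M/2, M/2)."""
    return ((v + M // 2) % M) - M // 2

rng = np.random.default_rng(0)
M = 32                                   # coarse lattice M*Z: rate log2(M) bits/sample
n = 8
s = rng.integers(-1000, 1000, n)         # strong common component (large power)
q1 = s + rng.integers(-5, 6, n)          # two already-quantized, highly correlated signals
q2 = s + rng.integers(-5, 6, n)

u1, u2 = q1 % M, q2 % M                  # encoders: reduce mod the coarse lattice (binning)

# decoder: the integer combination q1 - q2 has small power, so it falls inside
# the centered fundamental cell and is recovered exactly from the mod-M values
diff_hat = mod_centered(u1 - u2, M)
assert np.array_equal(diff_hat, q1 - q2)
```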

    How to Quantize n Outputs of a Binary Symmetric Channel to n−1 Bits?

    Suppose that Y^n is obtained by observing a uniform Bernoulli random vector X^n through a binary symmetric channel with crossover probability α. The "most informative Boolean function" conjecture postulates that the maximal mutual information between Y^n and any Boolean function b(X^n) is attained by a dictator function. In this paper, we consider the "complementary" case in which the Boolean function is replaced by f: {0,1}^n → {0,1}^{n−1}, namely, an (n−1)-bit quantizer, and show that I(f(X^n); Y^n) ≤ (n−1)·(1−h(α)) for any such f. Thus, in this case, the optimal function is of the form f(x^n) = (x_1, …, x_{n−1}).
    Comment: 5 pages, accepted ISIT 201
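For small n the stated bound can be verified by exact enumeration: the projection onto the first n−1 coordinates meets it with equality, while random (n−1)-bit quantizers stay below it. A quick sketch with arbitrarily chosen n and α:

```python
import math
import random

def h(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def mutual_info(f, n, alpha):
    """Exact I(f(X^n); Y^n) for X^n uniform on {0,1}^n and Y^n the output
    of a BSC(alpha), where f is given as a lookup table of size 2^n."""
    N = 1 << n
    joint, pf = {}, {}
    for x in range(N):
        for y in range(N):
            d = bin(x ^ y).count("1")                 # Hamming distance x -> y
            p = alpha**d * (1 - alpha)**(n - d) / N
            joint[f[x], y] = joint.get((f[x], y), 0.0) + p
    for (a, _), p in joint.items():
        pf[a] = pf.get(a, 0.0) + p
    H_fy = -sum(p * math.log2(p) for p in joint.values() if p > 0)
    H_f = -sum(p * math.log2(p) for p in pf.values() if p > 0)
    return H_f + n - H_fy                             # H(Y^n) = n: Y^n is uniform

n, alpha = 3, 0.11
bound = (n - 1) * (1 - h(alpha))

# the projection f(x^n) = (x_1, ..., x_{n-1}) meets the bound with equality
proj = [x >> 1 for x in range(1 << n)]
assert abs(mutual_info(proj, n, alpha) - bound) < 1e-6

# random (n-1)-bit quantizers never exceed the bound
rng = random.Random(0)
for _ in range(50):
    f = [rng.randrange(1 << (n - 1)) for _ in range(1 << n)]
    assert mutual_info(f, n, alpha) <= bound + 1e-9
```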

    Cyclic-Coded Integer-Forcing Equalization

    A discrete-time intersymbol interference channel with additive Gaussian noise is considered, where only the receiver has knowledge of the channel impulse response. An approach for combining decision-feedback equalization with channel coding is proposed, where decoding precedes the removal of intersymbol interference. This is accomplished by combining the recently proposed integer-forcing equalization approach with cyclic block codes. The channel impulse response is linearly equalized to an integer-valued response. This is then utilized by leveraging the property that a cyclic code is closed under (cyclic) integer-valued convolution. Explicit bounds on the performance of the proposed scheme are also derived.
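The closure property being leveraged can be checked directly: cyclically convolving a codeword of a binary cyclic code with any integer-valued response and reducing mod 2 yields another codeword, since the code is an ideal in GF(2)[x]/(x^n − 1). A small sketch with the (7,4) cyclic Hamming code (the integer channel taps are made up for illustration):

```python
import numpy as np

def circ_conv(a, b):
    """Cyclic (mod x^n - 1) convolution over the integers."""
    n = len(a)
    out = np.zeros(n, dtype=int)
    for i in range(n):
        for j in range(n):
            out[(i + j) % n] += a[i] * b[j]
    return out

# (7,4) binary cyclic Hamming code with generator g(x) = 1 + x + x^3
n = 7
g = np.array([1, 1, 0, 1, 0, 0, 0])
codewords = {tuple(circ_conv(np.array([(m >> k) & 1 for k in range(4)] + [0, 0, 0]), g) % 2)
             for m in range(16)}

# an arbitrary integer-valued "channel response" (illustrative taps)
h_int = np.array([2, -1, 3, 0, 0, 1, 0])

# cyclic integer convolution followed by mod-2 reduction maps codewords to codewords
for c in codewords:
    y = circ_conv(np.array(c), h_int) % 2
    assert tuple(y) in codewords
```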

    Precoded Integer-Forcing Universally Achieves the MIMO Capacity to Within a Constant Gap

    An open-loop single-user multiple-input multiple-output communication scheme is considered where a transmitter, equipped with multiple antennas, encodes the data into independent streams all taken from the same linear code. The coded streams are then linearly precoded using the encoding matrix of a perfect linear dispersion space-time code. At the receiver side, integer-forcing equalization is applied, followed by standard single-stream decoding. It is shown that this communication architecture achieves the capacity of any Gaussian multiple-input multiple-output channel up to a gap that depends only on the number of transmit antennas.
    Comment: to appear in the IEEE Transactions on Information Theory

    Successive Integer-Forcing and its Sum-Rate Optimality

    Integer-forcing receivers generalize traditional linear receivers for the multiple-input multiple-output channel by decoding integer-linear combinations of the transmitted streams, rather than the streams themselves. Previous works have shown that the additional degree of freedom in choosing the integer coefficients enables this receiver to approach the performance of maximum-likelihood decoding in various scenarios. Nonetheless, even for the optimal choice of integer coefficients, the additive noise at the equalizer's output is still correlated. In this work we study a variant of integer-forcing, termed successive integer-forcing, that exploits these noise correlations to improve performance. This scheme is the integer-forcing counterpart of successive interference cancellation for traditional linear receivers. Similarly to the latter, we show that successive integer-forcing is capacity achieving when it is possible to optimize the rate allocation to the different streams. In comparison to standard successive interference cancellation receivers, the successive integer-forcing receiver offers more possibilities for capacity-achieving rate tuples, and in particular, ones that are more balanced.
    Comment: A shorter version was submitted to the 51st Allerton Conference
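As a rough numerical companion, the achievable sum rate of plain (non-successive) integer forcing can be computed from the standard effective-noise metric, with the integer matrix chosen by brute force. This is a heuristic sketch under an assumed channel matrix and SNR, not the successive scheme studied in this paper:

```python
import itertools
import numpy as np

def if_sum_rate(H, snr, amax=3):
    """Achievable sum rate of plain integer forcing, using the effective-noise
    metric sigma_k^2 = a_k^T (I + snr H^T H)^{-1} a_k and a greedy brute-force
    choice of the full-rank integer matrix A (a heuristic, not exact)."""
    nt = H.shape[1]
    G = np.linalg.inv(np.eye(nt) + snr * H.T @ H)
    cands = sorted((np.array(a) for a in
                    itertools.product(range(-amax, amax + 1), repeat=nt) if any(a)),
                   key=lambda a: float(a @ G @ a))
    A = []
    for a in cands:                      # pick smallest-metric independent vectors
        if np.linalg.matrix_rank(np.array(A + [a])) == len(A) + 1:
            A.append(a)
        if len(A) == nt:
            break
    worst = max(float(a @ G @ a) for a in A)
    return nt * 0.5 * np.log2(1.0 / worst)

H = np.array([[1.0, 0.9], [0.2, 1.0]])   # illustrative channel matrix
snr = 100.0
cap = 0.5 * np.log2(np.linalg.det(np.eye(2) + snr * H.T @ H))
r = if_sum_rate(H, snr)
assert 0.0 < r <= cap                    # achievable, hence below white-input capacity
```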